Recognizing useful named entities plays a vital role in medical information processing and helps drive research in the medical domain. Deep learning methods have achieved good results in medical named entity recognition (NER). However, we find that existing methods face great challenges when dealing with nested named entities. In this work, we propose a novel method, referred to as ASAC, to resolve the difficulties caused by nesting; its core idea is to model the dependency between different categories of entity recognition. The proposed method contains two key modules: the adaptive shared (AS) part and the attentive conditional random field (ACRF) module. The former automatically assigns adaptive weights across tasks to achieve optimal recognition accuracy in the multi-layer network. The latter employs an attention operation to model the dependency between different entities. In this way, our model can learn better entity representations by capturing the implicit distinctions and relationships between different categories of entities. Extensive experiments on public datasets verify the effectiveness of our method. In addition, we perform ablation analyses to better understand our method.
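To make the adaptive-sharing idea concrete, here is a minimal PyTorch sketch (not the authors' implementation) in which each entity-category task combines the outputs of a stack of shared encoder layers through its own learnable softmax weights; the BiLSTM encoder, layer count, and tensor shapes are illustrative assumptions.

```python
# Minimal sketch of adaptive sharing across tasks, assuming a stack of
# shared BiLSTM layers and one learnable weight vector per task.
import torch
import torch.nn as nn

class AdaptiveSharedEncoder(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_layers, num_tasks):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.LSTM(input_dim if i == 0 else 2 * hidden_dim,
                    hidden_dim, batch_first=True, bidirectional=True)
            for i in range(num_layers)
        ])
        # One weight vector per task over the shared layers.
        self.task_weights = nn.Parameter(torch.zeros(num_tasks, num_layers))

    def forward(self, x):
        outputs, h = [], x
        for layer in self.layers:
            h, _ = layer(h)
            outputs.append(h)                          # (batch, seq, 2*hidden)
        stacked = torch.stack(outputs, dim=0)          # (layers, batch, seq, 2*hidden)
        alphas = torch.softmax(self.task_weights, -1)  # (tasks, layers)
        # Weighted sum of shared layer outputs, one representation per task.
        return torch.einsum("tl,lbsh->tbsh", alphas, stacked)
```

Each per-task representation would then feed its own (attentive) CRF decoder, which is where the dependency between entity categories could be modeled.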
In this paper, we propose a novel and generic data-driven approach for servoing the 3-D shape of continuum robots embedded with fiber Bragg grating (FBG) sensors. The development of 3-D shape perception and control technologies is vital for continuum robots to perform tasks autonomously in surgical interventions. However, owing to the nonlinear properties of continuum robots, the main difficulty lies in their modeling, especially for soft robots with variable stiffness. To address this issue, we propose a new robust adaptive controller that leverages FBG shape feedback and neural networks (NNs) to estimate the unknown model of the continuum robot online and to account for unexpected disturbances as well as NN approximation errors, exhibiting adaptive behavior toward the unmodeled system without prior data exploration. Based on a new composite adaptation algorithm, the asymptotic convergence of the closed-loop system with the NN-learned parameters is proved by Lyapunov theory. To validate the proposed method, we present a comprehensive experimental study using two continuum robots, both integrated with multi-core FBGs: a robot-assisted colonoscope and a multi-section extensible soft manipulator. The results demonstrate the feasibility, adaptability, and superiority of our controller in various unstructured environments as well as in phantom experiments.
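As a rough numerical illustration of the composite-adaptation idea (a sketch under assumed notation, not the authors' controller), the NN output weights can be updated from both the shape-tracking error and a model prediction error, while the control law adds the NN estimate as a feedforward term; the RBF features, gains, and dimensions below are assumptions.

```python
# Illustrative composite adaptation loop for an unknown robot model term,
# approximated by a single-hidden-layer RBF network (all names assumed).
import numpy as np

def rbf_features(q, centers, width=1.0):
    """Radial-basis features of the robot state/actuation q."""
    d = np.linalg.norm(q[None, :] - centers, axis=1)
    return np.exp(-(d / width) ** 2)

def adaptive_step(W, q, shape_error, pred_error, centers, gamma=0.05):
    """One composite-adaptation update of the NN output weights W."""
    phi = rbf_features(q, centers)                     # (n_features,)
    # Composite law: driven by tracking error and model prediction error.
    return W + gamma * np.outer(phi, shape_error + pred_error)

def control_step(W, q, shape_error, centers, kp=1.0):
    """Feedback control with NN feedforward compensation (sketch)."""
    phi = rbf_features(q, centers)
    feedforward = W.T @ phi                            # NN estimate of model term
    return -kp * shape_error + feedforward
```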
Semi-supervised learning (SSL) improves model generalization by leveraging large amounts of unlabeled data to augment limited labeled samples. However, currently popular SSL evaluation protocols are often constrained to computer vision (CV) tasks. In addition, previous work typically trains deep neural networks from scratch, which is time-consuming and environmentally unfriendly. To address the above issues, we construct a Unified SSL Benchmark (USB) by selecting 15 diverse, challenging, and comprehensive tasks from CV, natural language processing (NLP), and audio processing (Audio), on which we systematically evaluate the dominant SSL methods, and we also open-source a modular and extensible codebase for fair evaluation of these SSL methods. We further provide pre-trained versions of state-of-the-art neural models for CV tasks to make the cost of further tuning affordable. USB enables the evaluation of a single SSL algorithm on more tasks from multiple domains at a lower cost. Specifically, on a single NVIDIA V100, only 37 GPU days are required to evaluate FixMatch on the 15 tasks in USB, whereas 335 GPU days (279 GPU days on the 4 CV datasets other than ImageNet) are needed on 5 CV tasks under the typical protocol.
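For reference, the FixMatch objective that the GPU-day figures above refer to can be sketched as follows; this is a minimal sketch of the algorithm's unlabeled-data loss, not the USB codebase, and the confidence threshold and model interface are assumptions.

```python
# FixMatch-style unlabeled loss: pseudo-labels from weak augmentations,
# used only when confident, supervise the strongly augmented views.
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_weak, x_strong, threshold=0.95):
    with torch.no_grad():
        probs = torch.softmax(model(x_weak), dim=-1)
        max_prob, pseudo_label = probs.max(dim=-1)
        mask = (max_prob >= threshold).float()      # keep confident samples only
    logits_strong = model(x_strong)
    loss = F.cross_entropy(logits_strong, pseudo_label, reduction="none")
    return (loss * mask).mean()
```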
Binary neural networks (BNNs) are an extreme quantization version of convolutional neural networks (CNNs), with all features and weights mapped to just 1 bit. Although BNNs save substantial memory and computation, making CNNs applicable to edge or mobile devices, they suffer from degraded network performance due to the reduced representation capability after binarization. In this paper, we propose a new replaceable and easy-to-use convolution module, RepConv, which enhances feature maps by replicating the input or output along the channel dimension by $\beta$ times without extra parameters or convolution computation cost. We also define a set of RepTran rules for using RepConv throughout BNN modules, such as binary convolution, fully connected layers, and batch normalization. Experiments show that after the RepTran transformation, a set of highly cited BNNs achieve universally better performance than their original versions. For example, the Top-1 accuracy of Rep-ReCU-ResNet-20, i.e., a RepBconv-enhanced ReCU-ResNet-20, reaches 88.97% on CIFAR-10, 1.47% higher than the original network, and Rep-AdamBNN-ReActNet-A achieves 71.342% Top-1 accuracy on ImageNet, a state-of-the-art result for BNNs. Code and models are available at: https://github.com/imfinethanks/rep_adambnn.
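One plausible reading of the replication idea, sketched in PyTorch (not the released code at the repository above): the convolution produces fewer output maps and the result is repeated $\beta$ times along the channel dimension, so the replication itself adds no parameters or convolution cost; channel counts, kernel size, and the use of a standard convolution in place of a binary one are assumptions.

```python
# Sketch of channel replication: a thinner convolution whose output is
# repeated beta times along the channel dimension (no extra parameters).
import torch
import torch.nn as nn

class RepConv(nn.Module):
    def __init__(self, in_channels, out_channels, beta=2, kernel_size=3, padding=1):
        super().__init__()
        assert out_channels % beta == 0
        self.beta = beta
        # A binary convolution would stand here in a BNN; Conv2d is a stand-in.
        self.conv = nn.Conv2d(in_channels, out_channels // beta,
                              kernel_size, padding=padding, bias=False)

    def forward(self, x):
        y = self.conv(x)
        return y.repeat(1, self.beta, 1, 1)  # replication adds no parameters

# Example: RepConv(16, 32, beta=2)(torch.randn(1, 16, 32, 32)) -> (1, 32, 32, 32)
```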
We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity. The proposed model optimizes the weighted average of mean and percentile performances, and it covers the distributionally robust MDPs and the distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases. By considering that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive a tractable reformulation of our model. In particular, we show that the return-risk model can also account for risk from an uncertain transition kernel when one seeks only deterministic policies, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level. A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of the proposed model and algorithm through numerical experiments.
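Schematically, the return-risk objective can be written as a weighted combination of a worst-case mean and a worst-case percentile (VaR-type) return over a Wasserstein ball of reward distributions; the notation below is assumed for illustration and may differ from the paper's exact formulation.

```latex
% Schematic return-risk objective (assumed notation):
\max_{\pi \in \Pi} \;
  \lambda \inf_{\mathbb{P} \in \mathcal{B}_{\varepsilon}(\widehat{\mathbb{P}})}
    \mathbb{E}_{\mathbb{P}}\!\left[ R^{\pi} \right]
  \;+\; (1-\lambda) \inf_{\mathbb{P} \in \mathcal{B}_{\varepsilon}(\widehat{\mathbb{P}})}
    \operatorname{VaR}_{\alpha}^{\mathbb{P}}\!\left[ R^{\pi} \right]
```

Here $\mathcal{B}_{\varepsilon}(\widehat{\mathbb{P}})$ denotes the Wasserstein ball of radius $\varepsilon$ around the empirical reward distribution, $R^{\pi}$ is the return under policy $\pi$, and $\lambda \in [0,1]$ trades off the mean and percentile criteria.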
In the future, service robots are expected to operate autonomously for long periods of time without human intervention. Much work striving for this goal has emerged with the development of robotics, in both hardware and software. Today we believe that an important underpinning of long-term robot autonomy is the ability of robots to learn on site and on the fly, especially when they are deployed in changing environments or need to traverse different environments. In this paper, we examine the problem of long-term autonomy from the perspective of robot learning, especially in an online fashion, and discuss in tandem its premise, "data", and the subsequent "deployment".
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
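As an aside, the patch-based strategy most respondents used for overly large samples can be sketched as follows; the shapes and the 2-D setting are illustrative assumptions, not survey material.

```python
# Random patch cropping for training on images too large to process at once.
import numpy as np

def sample_patch(image, patch_size=(256, 256), rng=None):
    """Crop a random (ph, pw) patch from an (H, W, C) image."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    ph, pw = patch_size
    top = rng.integers(0, h - ph + 1)
    left = rng.integers(0, w - pw + 1)
    return image[top:top + ph, left:left + pw]
```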
Remaining Useful Life (RUL) estimation plays a critical role in Prognostics and Health Management (PHM). Traditional machine health maintenance systems are often costly, require substantial prior expertise, and are difficult to fit into highly complex and changing industrial scenarios. With the widespread deployment of sensors on industrial equipment, building the Industrial Internet of Things (IIoT) to interconnect these devices has become an inexorable trend in the development of the digital factory. By using a device's real-time operational data collected through the IIoT to estimate RUL with a prediction algorithm, a PHM system can schedule proactive maintenance measures for the device, thereby reducing maintenance costs and decreasing failure times during operation. This paper investigates remaining-useful-life prediction models for multi-sensor devices in the IIoT scenario. We review the mainstream RUL prediction models and summarize the basic steps of RUL prediction modeling in this scenario. On this basis, we propose a data-driven approach for RUL estimation. It employs a multi-head attention mechanism to fuse the multi-dimensional time-series data output from multiple sensors, in which attention on features captures the interactions between features and attention on sequences learns the weights of time steps. A Long Short-Term Memory (LSTM) network is then applied to learn the temporal features of the series. We evaluate the proposed model on two benchmark datasets (C-MAPSS and PHM08), and the results demonstrate that it outperforms state-of-the-art models. Moreover, through the interpretability of the multi-head attention mechanism, the proposed model can provide a preliminary explanation of engine degradation. Therefore, this approach is promising for predictive maintenance in IIoT scenarios.
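A minimal PyTorch sketch of this kind of architecture follows; the layer sizes, the single attention block over time steps, and the regression head are assumptions rather than the paper's exact configuration.

```python
# Multi-head attention over sensor time series followed by an LSTM and
# a regression head that predicts RUL from the final time step.
import torch
import torch.nn as nn

class AttentionLSTMRUL(nn.Module):
    def __init__(self, num_sensors, d_model=64, num_heads=4, hidden=64):
        super().__init__()
        self.embed = nn.Linear(num_sensors, d_model)
        self.time_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.lstm = nn.LSTM(d_model, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, num_sensors)
        h = self.embed(x)
        h, _ = self.time_attn(h, h, h)         # fuse information across time steps
        h, _ = self.lstm(h)
        return self.head(h[:, -1])             # RUL predicted from the last step

# Example: AttentionLSTMRUL(num_sensors=14)(torch.randn(8, 30, 14)) -> (8, 1)
```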
In this paper, we propose a novel variable-length estimation approach for shape sensing of extensible soft robots utilizing fiber Bragg gratings (FBGs). Shape reconstruction from FBG sensors has been increasingly developed for soft robots, but the narrow stretching range of FBG fiber makes it difficult to obtain accurate sensing results for extensible robots. To address this limitation, we introduce an FBG-based length sensor that leverages a rigid curved channel, through which the FBGs are allowed to slide within the robot as its body extends or compresses; we can therefore search for and match the FBGs in the fiber exhibiting the channel's specific constant curvature to determine the effective length. Fusing these measurements, we present a model-free filtering technique for simultaneous calibration of a variable-length model and temporally continuous length estimation of the robot, enabling accurate shape sensing using FBGs alone. The performance of the proposed method has been experimentally evaluated on an extensible soft robot equipped with an FBG fiber in both free and unstructured environments. The results concerning the dynamic accuracy and robustness of length estimation and shape sensing demonstrate the effectiveness of our approach.
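To illustrate the curvature-matching step, here is a sketch under assumed notation (the tolerance, grating spacing, and matching rule are not the authors' algorithm): gratings whose measured curvature equals the known constant curvature of the rigid channel are taken to lie inside the channel, and the effective length follows from where that match begins along the fiber.

```python
# Estimate effective robot length from which FBGs read the channel curvature.
import numpy as np

def effective_length(curvatures, grating_positions, channel_curvature, tol=0.05):
    """curvatures         : measured curvature at each FBG along the fiber (1/m)
       grating_positions  : arc-length position of each FBG along the fiber (m)
       channel_curvature  : known constant curvature of the rigid channel (1/m)"""
    inside_channel = np.abs(curvatures - channel_curvature) < tol
    if not inside_channel.any():
        return grating_positions[-1]           # whole fiber outside the channel
    first_inside = np.argmax(inside_channel)   # first grating matching the channel
    return grating_positions[first_inside]
```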
Cyber intrusion attacks that compromise users' critical and sensitive data are escalating in volume and intensity, especially with the growing connections between our daily life and the Internet. The large volume and high complexity of such attacks have impeded the effectiveness of most traditional defence techniques. At the same time, the remarkable performance of machine learning methods, especially deep learning, in computer vision has garnered research interest from the cyber security community in further enhancing and automating intrusion detection. However, expensive data labeling and the scarcity of anomalous data make it challenging to train an intrusion detector in a fully supervised manner. Therefore, intrusion detection based on unsupervised anomaly detection is an important capability as well. In this paper, we propose a three-stage network intrusion detection framework based on deep anomaly detection. The framework integrates unsupervised (K-means clustering), semi-supervised (GANomaly), and supervised (CNN) learning algorithms. We then evaluate and report the performance of the implemented framework on three benchmark datasets: NSL-KDD, CIC-IDS2018, and TON_IoT.
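A minimal sketch of how the three stages could be chained is given below; the stage interfaces, thresholds, and the placeholder callables standing in for the GANomaly scorer and the CNN classifier are assumptions, not the paper's implementation.

```python
# Three-stage pipeline sketch: K-means coarse filtering, semi-supervised
# anomaly scoring, and supervised classification of suspicious flows.
import numpy as np
from sklearn.cluster import KMeans

def three_stage_detect(X, anomaly_score, classify_attack,
                       n_clusters=8, normal_fraction=0.5, score_threshold=0.5):
    """X: (n_samples, n_features) flow features; returns a label per sample."""
    labels = np.full(len(X), "normal", dtype=object)

    # Stage 1: K-means groups traffic; very large clusters are treated as
    # likely-normal background traffic and skipped in later stages.
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    sizes = np.bincount(clusters, minlength=n_clusters)
    candidate = sizes[clusters] < normal_fraction * len(X)

    # Stage 2: semi-supervised anomaly scoring (GANomaly plays this role
    # in the paper); higher scores mean more anomalous.
    scores = anomaly_score(X[candidate])
    suspicious_idx = np.where(candidate)[0][scores > score_threshold]

    # Stage 3: a supervised CNN assigns an attack class to suspicious flows.
    labels[suspicious_idx] = classify_attack(X[suspicious_idx])
    return labels
```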